Defending Against Speculative Attacks: Reputation, Learning, and Coordination

Authors

Abstract

How does the central bank’s incentive to build a reputation affect speculators’ ability to coordinate and the likelihood of devaluation during speculative currency crises? What role does market information play in speculators’ coordination and in the central bank’s reputation building? I address these questions in a dynamic regime change game that highlights the interaction between the...

Similar Articles


Defending against speculative attacks when market opinions diverge

In a stylized continuous-time model of a small open economy with perfect capital mobility and a pegged exchange rate, I analyze an experiment in which domestic residents expect a future devaluation. Taking the public’s expectations as exogenous and allowing them to diverge, I analyze, in a short period before the expected devaluation takes place, how transactions of short-term collatera...


Defending Non-Bayesian Learning against Adversarial Attacks

Abstract This paper addresses the problem of non-Bayesian learning over multi-agent networks, where agents repeatedly collect partially informative observations about an unknown state of the world and try to collaboratively learn the true state. We focus on the impact of adversarial agents on the performance of consensus-based non-Bayesian learning, where non-faulty agents combine local le...


Defending BitTorrent against Strategic Attacks

BitTorrent has been shown to be efficient for bulk file transfer; however, it is susceptible to free riding by strategic clients such as BitTyrant. Strategic peers configure the client software so that, for little or no contribution, they can obtain good download speeds. Such strategic nodes exploit the altruism in the swarm, consume resources at the expense of other honest nodes, and create an u...


Auror: defending against poisoning attacks in collaborative deep learning systems

Deep learning in a collaborative setting is emerging as a cornerstone of many upcoming applications, wherein untrusted users collaborate to generate more accurate models. From the security perspective, this opens collaborative deep learning to poisoning attacks, wherein adversarial users deliberately alter their inputs to mis-train the model. These attacks are known for machine learning systems...



Journal

Journal title: SSRN Electronic Journal

Year: 2011

ISSN: 1556-5068

DOI: 10.2139/ssrn.1960673